<?xml version="1.0" encoding="ISO-8859-1"?>
<metadatalist>
	<metadata ReferenceType="Conference Proceedings">
		<site>sibgrapi.sid.inpe.br 802</site>
		<holdercode>{ibi 8JMKD3MGPEW34M/46T9EHH}</holdercode>
		<identifier>8JMKD3MGPEW34M/43BFHL8</identifier>
		<repository>sid.inpe.br/sibgrapi/2020/09.30.14.16</repository>
		<lastupdate>2020:09.30.14.16.57 sid.inpe.br/banon/2001/03.30.15.38 administrator</lastupdate>
		<metadatarepository>sid.inpe.br/sibgrapi/2020/09.30.14.16.57</metadatarepository>
		<metadatalastupdate>2022:06.14.00.00.14 sid.inpe.br/banon/2001/03.30.15.38 administrator {D 2020}</metadatalastupdate>
		<doi>10.1109/SIBGRAPI51738.2020.00024</doi>
		<citationkey>Souza:2020:FeLeIm</citationkey>
		<title>Feature learning from image markers for object delineation</title>
		<format>On-line</format>
		<year>2020</year>
		<numberoffiles>1</numberoffiles>
		<size>2832 KiB</size>
		<author>de Souza, Italos Estilon da Silva,</author>
		<affiliation>University of Campinas</affiliation>
		<editor>Musse, Soraia Raupp,</editor>
		<editor>Cesar Junior, Roberto Marcondes,</editor>
		<editor>Pelechano, Nuria,</editor>
		<editor>Wang, Zhangyang (Atlas),</editor>
		<e-mailaddress>italosestilon@gmail.com</e-mailaddress>
		<conferencename>Conference on Graphics, Patterns and Images, 33 (SIBGRAPI)</conferencename>
		<conferencelocation>Porto de Galinhas (virtual)</conferencelocation>
		<date>7-10 Nov. 2020</date>
		<publisher>IEEE Computer Society</publisher>
		<publisheraddress>Los Alamitos</publisheraddress>
		<booktitle>Proceedings</booktitle>
		<tertiarytype>Full Paper</tertiarytype>
		<transferableflag>1</transferableflag>
		<versiontype>finaldraft</versiontype>
		<keywords>object delineation, convolutional neural networks, feature extraction.</keywords>
		<abstract>Convolutional neural networks (CNNs) have been used in several computer vision applications. However, most successful models are pre-trained on large labeled datasets. Adapting such models to new applications (or datasets) with no label information can be an issue, calling for the construction of a suitable model from scratch. In this paper, we introduce an interactive method to estimate CNN filters from image markers with no need for backpropagation or pre-trained models. The method, named FLIM (feature learning from image markers), exploits the user's knowledge about image regions that discriminate objects for marker selection. For a given CNN architecture and user-drawn markers in an input image, FLIM can estimate the CNN filters by clustering marker pixels in a layer-by-layer fashion; that is, the filters of a given layer are estimated from the output of the previous one. We demonstrate the advantages of FLIM for object delineation over alternatives based on a state-of-the-art pre-trained model and the Lab color space. The results indicate the potential of the method for the construction of explainable CNN models.</abstract>
		<language>en</language>
		<targetfile>76.pdf</targetfile>
		<usergroup>italosestilon@gmail.com</usergroup>
		<visibility>shown</visibility>
		<documentstage>not transferred</documentstage>
		<mirrorrepository>sid.inpe.br/banon/2001/03.30.15.38.24</mirrorrepository>
		<nexthigherunit>8JMKD3MGPEW34M/43G4L9S</nexthigherunit>
		<nexthigherunit>8JMKD3MGPEW34M/4742MCS</nexthigherunit>
		<citingitemlist>sid.inpe.br/sibgrapi/2020/10.28.20.46 2</citingitemlist>
		<hostcollection>sid.inpe.br/banon/2001/03.30.15.38</hostcollection>
		<username>italosestilon@gmail.com</username>
		<agreement>agreement.html .htaccess .htaccess2</agreement>
		<lasthostcollection>sid.inpe.br/banon/2001/03.30.15.38</lasthostcollection>
		<url>http://sibgrapi.sid.inpe.br/rep-/sid.inpe.br/sibgrapi/2020/09.30.14.16</url>
	</metadata>
</metadatalist>